Ceph : Configure Ceph Cluster
2014/06/11
Install Ceph, a distributed storage system.
As an example, configure a Ceph Cluster in the following environment.

                               |
+------------------+           |           +-----------------+
| [ Admin Node ]   |10.0.0.80  |  10.0.0.30|  [ Client PC ]  |
|   Ceph-Deploy    |-----------+-----------|                 |
| Meta Data Server |           |           |                 |
+------------------+           |           +-----------------+
                               |
       +-----------------------+---------------------+
       |                       |                     |
       |10.0.0.81              |10.0.0.82            |10.0.0.83
+------+-----------+  +--------+---------+  +--------+---------+
| [ Ceph Node #1 ] |  | [ Ceph Node #2 ] |  | [ Ceph Node #3 ] |
|  Monitor Daemon  +--+  Monitor Daemon  +--+  Monitor Daemon  |
|  Object Storage  |  |  Object Storage  |  |  Object Storage  |
+------------------+  +------------------+  +------------------+
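Each Node must be able to resolve the other Nodes' names. If DNS is not set up for the srv.world domain, one option is to add entries to /etc/hosts on every Node; an illustrative sketch, with the addresses taken from the diagram above:

10.0.0.80    ceph-mds.srv.world    ceph-mds
10.0.0.81    ceph01.srv.world      ceph01
10.0.0.82    ceph02.srv.world      ceph02
10.0.0.83    ceph03.srv.world      ceph03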
[1] First, carry out the following on all Nodes, meaning the Admin Node and all Storage Nodes. Any user may be registered in sudoers; that user is used as the Ceph admin user from here on.
# for example, grant the admin user ( "trusty" here ) passwordless sudo
trusty@ceph01:~$ echo "trusty ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
trusty@ceph01:~$ sudo chmod 440 /etc/sudoers.d/ceph
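To make sure the sudoers entry took effect, a quick check on each Node could look like this:

# should print "root" without prompting for a password
trusty@ceph01:~$ sudo whoami
root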
[2] Create an SSH key pair on the Admin Node (with no passphrase) and send the public key to each Ceph Node so that it can connect without a password.
trusty@ceph-mds:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/trusty/.ssh/id_rsa):   # Enter
Created directory '/home/trusty/.ssh'.
Enter passphrase (empty for no passphrase):   # Enter
Enter same passphrase again:   # Enter
Your identification has been saved in /home/trusty/.ssh/id_rsa.
Your public key has been saved in /home/trusty/.ssh/id_rsa.pub.
The key fingerprint is:
8e:34:cb:bf:52:d5:19:7c:ec:05:60:e1:da:7c:62:6f trusty@ceph-mds
The key's randomart image is:
...
trusty@ceph-mds:~$ vi ~/.ssh/config
# create new ( define all Ceph Nodes and user )
Host ceph-mds
    Hostname ceph-mds.srv.world
    User trusty
Host ceph01
    Hostname ceph01.srv.world
    User trusty
Host ceph02
    Hostname ceph02.srv.world
    User trusty
Host ceph03
    Hostname ceph03.srv.world
    User trusty

# send SSH public key to a Node
trusty@ceph-mds:~$ ssh-copy-id ceph01
The authenticity of host 'ceph01.srv.world (10.0.0.81)' can't be established.
ECDSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:2b:4b:6e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
trusty@ceph01.srv.world's password:   # password for the user
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph01'"
and check to make sure that only the key(s) you wanted were added.

# send to others too
trusty@ceph-mds:~$ ssh-copy-id ceph02
trusty@ceph-mds:~$ ssh-copy-id ceph03
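Before continuing, it is worth verifying that key-based login works from the Admin Node without a password prompt; a quick check (the output may be the short hostname, depending on each Node's configuration):

# each command should return immediately without asking for a password
trusty@ceph-mds:~$ ssh ceph01 hostname
ceph01.srv.world
trusty@ceph-mds:~$ ssh ceph02 hostname
ceph02.srv.world
trusty@ceph-mds:~$ ssh ceph03 hostname
ceph03.srv.world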
[3] Configure the Ceph Cluster from the Admin Node.
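The transcripts below are run from a ~/ceph working directory with ceph-deploy already installed, as the prompt shows; if not yet done, a minimal setup sketch on the Admin Node:

# install the deployment tool and create a working directory
trusty@ceph-mds:~$ sudo apt-get -y install ceph-deploy
trusty@ceph-mds:~$ mkdir ceph
trusty@ceph-mds:~$ cd ceph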
# configure cluster
trusty@ceph-mds:~/ceph$ ceph-deploy new ceph01 ceph02 ceph03
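The "new" step writes an initial ceph.conf and a monitor keyring into the working directory. The contents look roughly like the following; the fsid is generated uniquely per cluster, so the value below is a placeholder:

trusty@ceph-mds:~/ceph$ cat ceph.conf
[global]
fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.0.0.81,10.0.0.82,10.0.0.83
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx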
# install Ceph on all Nodes
trusty@ceph-mds:~/ceph$ ceph-deploy install ceph01 ceph02 ceph03
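To confirm the packages landed on each Node, the installed version can be queried over SSH; the version shown here is just an example from the Firefly era, and the exact string depends on the release that was installed:

trusty@ceph-mds:~/ceph$ ssh ceph01 ceph --version
ceph version 0.80.1 (...)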
# initial configuration for monitoring and keys
trusty@ceph-mds:~/ceph$ ceph-deploy mon create-initial
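When "mon create-initial" completes, it gathers the cluster keys into the working directory; a listing should show roughly the following files:

trusty@ceph-mds:~/ceph$ ls
ceph.bootstrap-mds.keyring  ceph.client.admin.keyring  ceph.log
ceph.bootstrap-osd.keyring  ceph.conf                  ceph.mon.keyring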
[4] Configure storage from the Admin Node. In this example, create the directories /storage01, /storage02, /storage03 on the Nodes ceph01, ceph02, ceph03 respectively before running the steps below; one way to do so is shown next.
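For example, the directories can be created from the Admin Node over SSH (they could equally be made locally on each Node):

trusty@ceph-mds:~/ceph$ ssh ceph01 sudo mkdir /storage01
trusty@ceph-mds:~/ceph$ ssh ceph02 sudo mkdir /storage02
trusty@ceph-mds:~/ceph$ ssh ceph03 sudo mkdir /storage03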
# prepare Object Storage Daemons
trusty@ceph-mds:~/ceph$ ceph-deploy osd prepare ceph01:/storage01 ceph02:/storage02 ceph03:/storage03

# activate Object Storage Daemons
trusty@ceph-mds:~/ceph$ ceph-deploy osd activate ceph01:/storage01 ceph02:/storage02 ceph03:/storage03

# configure Meta Data Server
trusty@ceph-mds:~/ceph$ ceph-deploy admin ceph-mds
trusty@ceph-mds:~/ceph$ ceph-deploy mds create ceph-mds

# show status
trusty@ceph-mds:~/ceph$ ceph mds stat
e4: 1/1/1 up {0=ceph-mds=up:active}
trusty@ceph-mds:~/ceph$ ceph health
HEALTH_OK   # turns to "OK" after a few minutes if there is no problem
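As a further check, the overall cluster state and the OSD count can be inspected from the Admin Node. The output below is abbreviated, the fsid is a placeholder, and epoch numbers will differ per cluster:

trusty@ceph-mds:~/ceph$ ceph -s
    cluster xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
     health HEALTH_OK
     ...
# all three OSDs should be up and in
trusty@ceph-mds:~/ceph$ ceph osd stat
     osdmap e13: 3 osds: 3 up, 3 in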